Artificial Neural Networks

Author

  • Jarod Hart
Abstract

Artificial Neural Networks (ANNs) are machine learning algorithms that imitate neural function. There is a vast theory of ANNs available these days. To limit the scope and length of this project summary, we will only consider three ANN algorithms, which are typically the first ones introduced when starting to work with ANNs. They are the Perceptron, Adaline, and multi-layer feedforward networks. They are all supervised learning algorithms, meaning that they train their parameters based on a set of pre-classified (or labeled) training data. That is, each element of the training data has a known label associated with it. The goal is to learn how to classify novel input according to a rule learned from the training data. We should acknowledge that calling the Perceptron and Adaline models neural networks may be overstating their complexity. Indeed, these models are formulated based on only a single neuron. Hence they could be viewed as a trivial network, but it may be more helpful to think of them as a primer for building mathematical models of neurons. Their construction introduces some foundational ideas of the theory, and it makes it much more manageable to work with more complicated networks, like multi-layer networks. There are ways to accomplish more complicated tasks by preprocessing training data and/or using several Perceptron/Adaline neurons, but even in these models the neurons still function largely independently (not in the coordinated way inherent to more complicated ANN models). We mention a couple of ways to extend these models in the Possible Extensions section. There is an important development in the transition from the Perceptron to the Adaline model. Roughly speaking, the Perceptron learns by using a somewhat ad hoc training rule that can be motivated by a geometric argument. The training is a little cumbersome and is reliant on a linear structure in some ways. This makes it difficult to extend directly to more complicated and nonlinear models.
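The single-neuron training rule described above can be sketched concretely. The following is a minimal illustration of the classic Perceptron mistake-driven update, assuming numpy; the function name and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    """Train a single Perceptron neuron on labeled data.

    X: (n_samples, n_features) array; y: labels in {-1, +1}.
    Returns weights w and bias b so that sign(X @ w + b) predicts y.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Hard-threshold prediction; update only on a mistake.
            # Geometrically, this nudges the separating hyperplane
            # toward the misclassified point.
            if yi * (xi @ w + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Linearly separable toy data (illustrative): +1 roughly where x1 + x2 > 1.
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.1], [1.5, 0.8]])
y = np.array([-1, 1, -1, 1])
w, b = perceptron_train(X, y)
preds = np.sign(X @ w + b)
```

For linearly separable data such as this, the Perceptron convergence theorem guarantees the loop eventually stops making updates; for non-separable data it never settles, which is one symptom of the limitations discussed above.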
Adaline introduces a shift in point of view, which is to train the neuron by minimizing an error function. This notion of minimization in place of a more geometric argument is much more easily extended to more complicated settings, which can be observed in the construction of the multi-layer networks. Much of the information presented here is taken from Mitchell's book on machine learning, but several aspects are presented differently and at times more concretely. In particular, the details of the learning rules here are laid out in more detailed but less general terms. This description is also much less comprehensive, which allows a much shorter presentation of the material. It may be of use for those just starting to work with ANNs, but it would probably be best used along with other resources.
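The error-minimization idea behind Adaline can likewise be sketched. Here the neuron's linear activation is fit by gradient descent on the squared error E(w, b) = ½·Σ(yᵢ − (w·xᵢ + b))², the differentiable objective that generalizes to multi-layer networks via backpropagation. This is a hedged sketch assuming numpy; the function name, learning rate, and toy data are illustrative.

```python
import numpy as np

def adaline_train(X, y, epochs=500, lr=0.05):
    """Train an Adaline neuron by batch gradient descent on squared error.

    Unlike the Perceptron, the update uses the linear activation
    X @ w + b directly (no threshold), so every example contributes
    a gradient step whether or not it is classified correctly.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        output = X @ w + b        # linear activation
        error = y - output
        w += lr * (X.T @ error)   # -dE/dw = X.T @ (y - output)
        b += lr * error.sum()     # -dE/db = sum of errors
    return w, b

# Same illustrative toy data as before; thresholding is applied
# only after training, when the neuron is used as a classifier.
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.1], [1.5, 0.8]])
y = np.array([-1.0, 1.0, -1.0, 1.0])
w, b = adaline_train(X, y)
preds = np.sign(X @ w + b)
```

Because the objective is a smooth function of the weights, the same recipe (define an error, follow its negative gradient) carries over to multi-layer networks, whereas the Perceptron's mistake-driven rule does not extend so directly.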


Similar resources

Prediction the Return Fluctuations with Artificial Neural Networks' Approach

Time variation of returns, inefficiency studies, and the presence of factors affecting share return rates have driven the development of modern, intelligent methods for estimating and evaluating share returns in stock companies. The aim of this research is the prediction of returns from financial variables using an artificial neural network approach. Therefore, the statistical population of this study incl...


HYBRID ARTIFICIAL NEURAL NETWORKS BASED ON ACO-RPROP FOR GENERATING MULTIPLE SPECTRUM-COMPATIBLE ARTIFICIAL EARTHQUAKE RECORDS FOR SPECIFIED SITE GEOLOGY

The main objective of this paper is to use ant-optimized neural networks to generate artificial earthquake records. In this regard, training accelerograms are selected according to the site geology of the recording station, and the Wavelet Packet Transform (WPT) is used to decompose these records. Then Artificial Neural Networks (ANN) optimized with Ant Colony Optimization and the resilient Backpropagation algorith...


Prediction of breeding values for the milk production trait in Iranian Holstein cows applying artificial neural networks

Artificial neural networks, learning algorithms and mathematical models mimicking the information-processing ability of the human brain, can be used for non-linear and complex data. The aim of this study was to predict the breeding values for the milk production trait in Iranian Holstein cows applying artificial neural networks. Data on 35167 Iranian Holstein cows recorded between 1998 and 2009 were ...


On the convergence speed of artificial neural networks in the solving of linear systems

Artificial neural networks have advantages such as learning, adaptation, fault-tolerance, parallelism and generalization. This paper is a scrutiny of the application of diverse learning methods to the speed of convergence of neural networks. For this aim, first we introduce a perceptron method based on artificial neural networks which has been applied for solving a non-singula...




Journal title:

Volume   Issue

Pages  -

Publication date: 2017